Variational Inverse Control with Events: A General Framework for Data-Driven Reward Definition

Fu, Justin, Singh, Avi, Ghosh, Dibya, Yang, Larry, Levine, Sergey

Neural Information Processing Systems

The design of a reward function often poses a major practical challenge to real-world applications of reinforcement learning. Approaches such as inverse reinforcement learning attempt to overcome this challenge, but require expert demonstrations, which can be difficult or expensive to obtain in practice. We propose inverse event-based control, which generalizes inverse reinforcement learning methods to cases where full demonstrations are not needed, such as when only samples of desired goal states are available. Our method is grounded in an alternative perspective on control and reinforcement learning, where an agent's goal is to maximize the probability that one or more events will happen at some point in the future, rather than maximizing cumulative rewards. We demonstrate the effectiveness of our methods on continuous control tasks, with a focus on high-dimensional observations like images where rewards are hard or even impossible to specify.
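The abstract's alternative objective can be stated compactly. The following is a sketch of the contrast, using $e_t \in \{0,1\}$ as a per-timestep event indicator; this notation is assumed for illustration, not taken from the paper itself:

```latex
% Standard RL: maximize the expected cumulative reward over an episode
\max_\pi \; \mathbb{E}_\pi\!\left[\sum_{t=1}^{T} r(s_t, a_t)\right]

% Event-based control (as described in the abstract): maximize the
% probability that the event occurs at some point during the episode
\max_\pi \; p\!\left(\exists\, t \le T : e_t = 1 \;\middle|\; \pi\right)
```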


Reviews: Variational Inverse Control with Events: A General Framework for Data-Driven Reward Definition

Neural Information Processing Systems

The paper proposes a method that alternates between learning a reward function and learning a policy. Algorithmically, the proposed method resembles inverse reinforcement learning and imitation learning. However, unlike existing methods, which require full expert trajectories, the proposed method only requires examples of the goal states that the expert aims to reach. Experiments show that the proposed method reaches the goal states more accurately than an RL method trained with a naïve binary-classification reward.
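The alternating scheme the review describes can be sketched as follows: fit a classifier that separates the user-provided goal states from states visited by the current policy, then use the classifier's log-odds as a learned reward for policy improvement. This is a minimal illustrative sketch, not the paper's actual algorithm; the function names, the logistic-regression reward model, and the synthetic data are all assumptions made here for clarity.

```python
import numpy as np

def fit_reward(goal_states, policy_states, lr=0.5, steps=500):
    """Fit a logistic classifier so that sigmoid(s @ w + b) approximates
    p(state is a goal state). Returns the weights (w, b)."""
    X = np.vstack([goal_states, policy_states])
    y = np.concatenate([np.ones(len(goal_states)),
                        np.zeros(len(policy_states))])
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        g = p - y                      # gradient of the logistic loss
        w -= lr * (X.T @ g) / len(X)
        b -= lr * g.mean()
    return w, b

def reward(states, w, b):
    # Log-odds of "goal" under the classifier, used as the learned reward.
    return states @ w + b

# Illustrative synthetic data: goal-state samples cluster around (2, 2),
# while the current policy visits states near the origin.
rng = np.random.default_rng(0)
goal = rng.normal(loc=2.0, scale=0.3, size=(100, 2))
visited = rng.normal(loc=0.0, scale=1.0, size=(100, 2))

w, b = fit_reward(goal, visited)
# The learned reward should rank goal states above policy-visited states;
# a policy-improvement step (omitted here) would then maximize it.
print(reward(goal, w, b).mean() > reward(visited, w, b).mean())
```

In a full implementation the two steps would repeat: the policy improves against the learned reward, fresh policy states are collected, and the classifier is refit, which is the alternation the review refers to.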

